This repository was archived by the owner on Jan 15, 2024. It is now read-only.

Conversation

@bgawrych bgawrych commented Oct 6, 2021

Description

This PR enables quantization in the question answering scripts.
A custom calibration collector was added to avoid a significant accuracy drop.
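A percentile-based collector is one common way to avoid the accuracy drop that naive min/max calibration causes when activation outliers inflate the quantization range. The sketch below is illustrative only — the class and method names are hypothetical, not the PR's actual implementation:

```python
import numpy as np

class PercentileCalibrationCollector:
    """Hypothetical sketch: collect activation statistics during calibration.

    Raw min/max ranges are easily inflated by rare outliers, which wastes
    quantization resolution and hurts accuracy. Clipping the range to a
    high percentile keeps the bulk of the distribution representable.
    """

    def __init__(self, percentile=99.9):
        self.percentile = percentile
        self.samples = {}

    def collect(self, name, arr):
        # Accumulate flattened activation values per layer name.
        self.samples.setdefault(name, []).append(np.ravel(np.asarray(arr, dtype=np.float64)))

    def min_max(self, name):
        # Return the clipped calibration range for a layer.
        values = np.concatenate(self.samples[name])
        lo = np.percentile(values, 100.0 - self.percentile)
        hi = np.percentile(values, self.percentile)
        return float(lo), float(hi)
```

The collector would be fed with layer outputs during a few calibration batches, and the resulting `(lo, hi)` ranges passed to the quantizer in place of raw min/max.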

@bgawrych bgawrych requested a review from a team as a code owner October 6, 2021 12:03
@bartekkuncer bartekkuncer left a comment

Looks good to me.

github-actions bot commented Oct 6, 2021

answerable_logits
"""
backbone_net = self.backbone
if self.quantized:

Suggested change
if self.quantized:
if self.quantized_backbone is not None:

and remove quantized?

@bgawrych (Author)

I thought about it, but I kept this quantized flag as an on/off switch for the quantized model. I'm not sure it is really useful, though. What do you think?

It doesn't seem useful to me.
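The pattern the review converges on — dropping the separate flag and dispatching on whether a quantized backbone is attached — can be sketched as follows. Names are hypothetical and the backbones are stand-ins, not the PR's actual model code:

```python
class QAModel:
    """Sketch of the review suggestion: no separate `quantized` flag;
    the presence of `quantized_backbone` itself selects the path."""

    def __init__(self, backbone, quantized_backbone=None):
        self.backbone = backbone
        self.quantized_backbone = quantized_backbone

    def forward(self, x):
        # Prefer the quantized backbone whenever one has been attached,
        # instead of checking a boolean flag that can drift out of sync.
        net = (self.quantized_backbone
               if self.quantized_backbone is not None
               else self.backbone)
        return net(x)
```

This avoids the state where a `quantized` flag is set but no quantized backbone exists (or vice versa), which is presumably why the reviewer found the flag redundant.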

github-actions bot commented Oct 8, 2021

